This study investigates the impact of machine learning models on the generation of counterfactual explanations by benchmarking three different model families: decision trees (fully transparent, interpretable, white-box models), random forests (semi-interpretable, grey-box models), and neural networks (fully opaque, black-box models). We tested the counterfactual generation process using four algorithms (DiCE, WatcherCF, Prototype, and GrowingSpheresCF) on five different datasets (COMPAS, Adult, German Credit, Diabetes, and Breast Cancer). Our findings indicate that: (1) different machine learning models have no impact on the generation of counterfactual explanations; (2) counterfactual algorithms based solely on proximity loss functions are not actionable and do not provide meaningful explanations; (3) one cannot obtain meaningful evaluation results without guaranteeing plausibility in the counterfactual generation process, and algorithms that do not account for plausibility in their internal mechanisms will lead to biased and unreliable conclusions when evaluated with the current state-of-the-art metrics; (4) a qualitative analysis (in addition to the quantitative one) is strongly recommended to ensure a robust analysis of counterfactual explanations and the potential identification of biases.
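To make finding (2) concrete, the sketch below shows the generic Wachter-style, proximity-driven counterfactual search that several of the benchmarked algorithms build on: gradient steps push the prediction toward the desired class while an L1 penalty keeps the counterfactual close to the query. This is a minimal illustrative sketch in PyTorch, assuming a differentiable classifier that outputs P(y=1); it is not any of the benchmarked implementations.

```python
import torch

def proximity_counterfactual(model, x, target=1.0, lam=0.1, steps=500, lr=0.05):
    """Wachter-style search: minimize (f(x_cf) - target)^2 + lam * ||x_cf - x||_1.
    Illustrative only; assumes `model` is differentiable and outputs P(y=1)."""
    x_cf = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = model(x_cf).squeeze()
        loss = (pred - target) ** 2 + lam * torch.norm(x_cf - x, p=1)
        loss.backward()
        opt.step()
    return x_cf.detach()

# Hypothetical usage with a small neural-network classifier on 4 features.
net = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(),
                          torch.nn.Linear(8, 1), torch.nn.Sigmoid())
x_query = torch.zeros(1, 4)
x_counterfactual = proximity_counterfactual(net, x_query)
```

As the findings above argue, this purely proximity-based objective encodes nothing about plausibility or actionability, which is exactly why explanations produced this way can be meaningless.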
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.
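Instruction prompt tuning is described as a parameter-efficient alignment method; the sketch below illustrates the general soft-prompt mechanism it builds on, using a small public Hugging Face model as a stand-in. This is an assumed, generic illustration of prompt tuning, not the Med-PaLM training recipe or its exemplars.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# gpt2 stands in for a much larger instruction-tuned LLM; only the soft
# prompt embeddings are trained, the base model stays frozen.
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
for p in lm.parameters():
    p.requires_grad = False

n_prompt, dim = 20, lm.config.n_embd
soft_prompt = torch.nn.Parameter(torch.randn(n_prompt, dim) * 0.02)

def forward_with_prompt(input_ids, labels):
    tok_emb = lm.get_input_embeddings()(input_ids)            # (B, T, dim)
    batch = input_ids.size(0)
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)   # (B, n_prompt, dim)
    inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
    # mask the prompt positions out of the language-modeling loss
    pad = torch.full((batch, n_prompt), -100, dtype=labels.dtype)
    return lm(inputs_embeds=inputs_embeds, labels=torch.cat([pad, labels], dim=1))

ids = tok("Q: What causes iron-deficiency anemia?\nA:", return_tensors="pt").input_ids
loss = forward_with_prompt(ids, ids.clone()).loss
loss.backward()   # gradients flow only into soft_prompt
```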
We present a method for providing statistical guarantees on runtime safety and goal reachability for integrated planning and control of a class of systems with unknown nonlinear stochastic underactuated dynamics. Specifically, given a dynamics dataset, our method jointly learns a mean dynamics model, a spatially-varying disturbance bound that captures the effect of noise and model mismatch, and a feedback controller based on contraction theory that stabilizes the learned dynamics. We propose a sampling-based planner that uses the mean dynamics model and simultaneously bounds the closed-loop tracking error via a learned disturbance bound. We employ techniques from Extreme Value Theory (EVT) to estimate, to a specified level of confidence, several constants which characterize the learned components and govern the size of the tracking error bound. This ensures plans are guaranteed to be safely tracked at runtime. We validate that our guarantees translate to empirical safety in simulation on a 10D quadrotor, and in the real world on a physical CrazyFlie quadrotor and Clearpath Jackal robot, whereas baselines that ignore the model error and stochasticity are unsafe.
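The EVT step can be pictured with a short sketch: fit a generalized extreme value distribution to block maxima of observed disturbance magnitudes and read off an upper bound at the desired confidence level. The synthetic data, block size, and confidence level below are assumptions for illustration, not the paper's estimator.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# stand-in for observed residuals ||x_dot - f_learned(x, u)|| on the dataset
disturbances = np.abs(rng.normal(0.0, 0.05, size=5000))

# block maxima: EVT says their distribution approaches a GEV family
block_max = disturbances.reshape(100, 50).max(axis=1)
shape, loc, scale = genextreme.fit(block_max)

# disturbance bound that a new block maximum stays below with 99% confidence
d_bar = genextreme.ppf(0.99, shape, loc=loc, scale=scale)
print(f"estimated disturbance bound: {d_bar:.4f}")
```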
Diffusion models are state-of-the-art deep learning empowered generative models that are trained based on the principle of learning forward and reverse diffusion processes via progressive noise-addition and denoising. To gain a better understanding of the limitations and potential risks, this paper presents the first study on the robustness of diffusion models against backdoor attacks. Specifically, we propose BadDiffusion, a novel attack framework that engineers compromised diffusion processes during model training for backdoor implantation. At the inference stage, the backdoored diffusion model will behave just like an untampered generator for regular data inputs, while falsely generating some targeted outcome designed by the bad actor upon receiving the implanted trigger signal. Such a critical risk can be dreadful for downstream tasks and applications built upon the problematic model. Our extensive experiments on various backdoor attack settings show that BadDiffusion can consistently lead to compromised diffusion models with high utility and target specificity. Even worse, BadDiffusion can be made cost-effective by simply finetuning a clean pre-trained diffusion model to implant backdoors. We also explore some possible countermeasures for risk mitigation. Our results call attention to potential risks and possible misuse of diffusion models.
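As a rough illustration of how a trigger can be coupled to an attacker-chosen output during denoising training, the sketch below poisons a fraction of DDPM training steps: trigger-stamped noisy inputs are regressed toward the attacker's target image. This is a deliberately simplified, assumed variant for intuition only, not the exact BadDiffusion modification of the forward process.

```python
import torch

def poisoned_ddpm_loss(eps_model, x0, trigger, target, alphas_bar, poison_rate=0.1):
    """One simplified DDPM training loss with backdoor poisoning (illustration
    only). Clean samples get the usual noise-prediction loss; poisoned samples
    carry the trigger pattern and use the attacker's target as the clean image."""
    b = x0.size(0)
    t = torch.randint(0, alphas_bar.size(0), (b,))
    a = alphas_bar[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x0)

    poisoned = (torch.rand(b) < poison_rate).view(b, 1, 1, 1)
    x_ref = torch.where(poisoned, target.expand_as(x0), x0)   # swap in target
    x_t = a.sqrt() * x_ref + (1 - a).sqrt() * noise           # forward diffusion
    x_t = torch.where(poisoned, x_t + trigger, x_t)           # stamp the trigger

    return torch.nn.functional.mse_loss(eps_model(x_t, t), noise)
```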
Decentralized multi-agent planning has been an important field of research in robotics. An interesting and impactful application in this field is decentralized vehicle coordination in unstructured road environments. For example, at an intersection, it is useful, yet difficult, to deconflict multiple vehicles on intersecting paths in the absence of a central coordinator. Common sense tells us that for a vehicle to navigate such unstructured environments, the driver must understand and conform to the implicit "social etiquette" observed by nearby drivers. To study this implicit driving protocol, we collected the Berkeley DeepDrive Drone dataset. The dataset contains 1) a set of aerial videos recording unstructured driving, 2) a collection of images and annotations for training vehicle detection models, and 3) a kit of development scripts illustrating typical usage. We argue that the dataset is of primary interest for studying the decentralized multi-agent planning employed by human drivers, and of secondary interest for computer vision in remote sensing settings.
Inverse text normalization (ITN) is used to convert the spoken-form output of an automatic speech recognition (ASR) system into written form. Traditional handcrafted ITN rules can be complex to transcribe and maintain. Meanwhile, neural modeling approaches require large-scale, high-quality spoken-written pair examples from the same or a similar domain as the ASR system (in-domain data). Both approaches require costly and complex annotation. In this paper, we propose a data augmentation technique that efficiently generates rich spoken-written numeric pairs from out-of-domain textual data with minimal human annotation. We empirically demonstrate that an ITN model trained with our data augmentation technique consistently outperforms an ITN model trained using only in-domain data, by 14.44% in overall accuracy, across all numeric surfaces (e.g., cardinal, currency, and fraction).
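The core of the augmentation is minting spoken-written pairs from ordinary text rather than from hand-annotated ASR output. Below is a minimal sketch of that pair-generation idea using the open-source num2words verbalizer; the paper's technique covers far richer numeric surfaces (currency, fractions, etc.) mined from out-of-domain data, so this is only an assumed illustration.

```python
from num2words import num2words  # pip install num2words

def spoken_written_pairs(values):
    """Turn written numbers into (spoken, written) training pairs for an ITN
    model, e.g. ('forty-two', '42'). Sketch of the pair-minting idea only."""
    return [(num2words(v), str(v)) for v in values]

print(spoken_written_pairs([7, 42, 1999]))
```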
Binary classification with imbalanced datasets is challenging: models tend to treat all samples as belonging to the majority class. Although existing solutions such as sampling methods, cost-sensitive methods, and ensemble learning improve accuracy on the minority class, they are limited by overfitting or by cost parameters that are difficult to set. We propose HADR, a hybrid approach with dimension reduction that consists of data block construction, dimensionality reduction, and ensemble learning with deep neural network classifiers. We evaluate its performance on eight imbalanced public datasets in terms of recall, G-mean, and AUC. The results show that our model outperforms state-of-the-art methods.
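The pipeline description maps naturally onto a short sketch: partition the majority class into blocks, pair each block with the minority samples, reduce dimensionality, and train one small neural classifier per balanced block, averaging their scores at prediction time. The block count, network size, and label convention below are assumptions for illustration, not the published HADR configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def hadr_like_ensemble(X, y, n_blocks=5, n_components=10, seed=0):
    """Sketch of a HADR-style pipeline (assumed details). Label 0 is taken to
    be the majority class and label 1 the minority class."""
    rng = np.random.default_rng(seed)
    maj, mino = X[y == 0], X[y == 1]
    maj = maj[rng.permutation(len(maj))]
    blocks = np.array_split(maj, n_blocks)          # data block construction

    pca = PCA(n_components=min(n_components, X.shape[1])).fit(X)  # reduction
    members = []
    for blk in blocks:                              # one classifier per block
        Xb = pca.transform(np.vstack([blk, mino]))
        yb = np.r_[np.zeros(len(blk)), np.ones(len(mino))]
        members.append(MLPClassifier(hidden_layer_sizes=(32,), max_iter=300).fit(Xb, yb))
    return pca, members

def predict_minority_score(pca, members, X):
    Z = pca.transform(X)
    return np.mean([m.predict_proba(Z)[:, 1] for m in members], axis=0)
```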
We present a motion planning algorithm for a class of uncertain control-affine nonlinear systems that guarantees runtime safety and goal reachability when using high-dimensional sensor measurements (e.g., RGB-D images) and a learned perception module in the feedback control loop. First, given a dataset of states and observations, we train a perception system that seeks to invert a subset of the state from an observation, and we estimate an upper bound on the perception error that holds with high probability in a trusted domain near the data. Next, we use contraction theory to design a stabilizing state feedback controller and a convergent dynamic state observer that uses the learned perception system to update its state estimate. We derive a bound on the trajectory tracking error incurred when this controller is subjected to errors in the dynamics and incorrect state estimates. Finally, we integrate this bound into a sampling-based motion planner, guiding it to return trajectories that can be safely tracked at runtime using sensor data. We demonstrate our approach in simulation on a 4D car, a 6D planar quadrotor, and a 17D manipulation task with RGB(-D) sensor measurements, showing that our method safely and reliably steers the system to the goal, while baselines that fail to account for the trusted domain or the state estimation error can be unsafe.
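The final integration step can be pictured with a simple sketch: a contracting closed loop with contraction rate lambda and disturbance bound d_bar admits a tracking-error tube, and the planner rejects any candidate path that does not clear obstacles by at least that tube radius. The radius formula below is the generic contraction-tube estimate under an assumed uniform metric; the paper's actual bound additionally folds in the perception error valid inside the trusted domain.

```python
import numpy as np

def tube_radius(d_bar, lam, chi=1.0):
    """Generic steady-state tracking-error tube for a contracting closed loop:
    disturbance bound d_bar, contraction rate lam, metric condition number chi.
    Assumed illustrative formula, not the paper's exact bound."""
    return np.sqrt(chi) * d_bar / lam

def is_plan_safe(waypoints, obstacles, d_bar, lam, robot_radius=0.2):
    """Accept a candidate plan only if every waypoint clears every circular
    obstacle by the robot radius plus the tracking-error tube."""
    eps = tube_radius(d_bar, lam)
    for p in waypoints:
        for center, r in obstacles:
            if np.linalg.norm(np.asarray(p) - np.asarray(center)) < r + robot_radius + eps:
                return False
    return True

print(is_plan_safe([(0, 0), (1, 1), (2, 2)], [((1.5, 0.5), 0.3)], d_bar=0.05, lam=0.8))
```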
We design short-blocklength codes for the Gaussian wiretap channel under information-theoretic security guarantees. Our approach consists in decoupling the reliability and secrecy constraints in the code design. Specifically, we handle the reliability constraint with an autoencoder and the secrecy constraint with hash functions. For blocklengths less than or equal to 16, we evaluate our code construction through simulations of the probability of error at the legitimate receiver and the leakage at the eavesdropper. This leakage is defined as the mutual information between the confidential message and the eavesdropper's channel observations, and is measured empirically with a neural-network-based mutual information estimator. Our simulation results provide examples of codes with positive secrecy rates that outperform the best known non-constructive achievable secrecy rates for the Gaussian wiretap channel. Furthermore, we show that our code design is suitable for compound and arbitrarily varying Gaussian wiretap channels, for which the channel statistics are not perfectly known but only belong to a pre-specified uncertainty set. These models capture not only the uncertainty related to estimating the channel statistics, but also scenarios in which the eavesdropper jams the legitimate transmission or influences its own channel statistics by changing its location.
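One standard way to realize a hash-based secrecy layer is universal hashing with a random binary Toeplitz matrix, which compresses the reliably decoded bits into a shorter, nearly uniform secret. The sketch below shows that hashing step in isolation; the choice of a Toeplitz hash and the sizes used are assumptions for illustration, not necessarily the exact construction paired with the autoencoder in the paper.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
n, k = 64, 16                        # reconciled bits in, secret bits out

# random k x n binary Toeplitz matrix defines a universal hash family member
first_col = rng.integers(0, 2, size=k)
first_row = np.concatenate(([first_col[0]], rng.integers(0, 2, size=n - 1)))
T = toeplitz(first_col, first_row)

message = rng.integers(0, 2, size=n)
secret = T.dot(message) % 2          # compressed, nearly uniform secret bits
print(secret)
```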
This paper introduces DGNet, a novel deep framework that exploits object gradient supervision for camouflaged object detection (COD). It decouples the task into two connected branches, i.e., a context encoder and a texture encoder. The essential connection is the gradient-induced transition, which represents a soft grouping between context and texture features. Benefiting from this simple but efficient framework, DGNet outperforms existing state-of-the-art COD models by a large margin. Notably, our efficient version, DGNet-S, runs in real time (80 fps) and achieves results comparable to the cutting-edge model JCSOD-CVPR$_{21}$ with only 6.82% of the parameters. Application results further show that the proposed DGNet performs well on polyp segmentation, defect detection, and transparent object segmentation tasks. The code will be made available at https://github.com/gewelsji/dgnet.